In this project, I created a physics simulation using Verlet integration implemented in C++ within Unreal Engine 5. While Unreal Engine was primarily used for rendering, the physics calculations were entirely custom, focusing on building a simple yet effective system for simulating motion with basic collision detection and resolution. This hands-on approach allowed me to explore the nuances of Verlet integration, a method widely appreciated for its stability and simplicity in real-time simulations.
The motivation behind this project was to deepen my understanding of physics systems and how they integrate into game development. I was particularly interested in the fundamental aspects of motion simulation and collision detection/solving, as these are critical building blocks for many interactive systems. By implementing these features myself, I gained valuable insights into their mathematical foundations and practical challenges, while also learning how to optimize their performance for real-time use.
This experiment not only improved my skills in physics programming but also highlighted the challenges and rewards of building a system from the ground up. Leveraging Unreal Engine 5 for rendering allowed me to focus on crafting the physics logic while presenting the results in a visually engaging environment. It was a rewarding step in combining technical problem-solving with creative design.
The physics engine implementation for this project was designed with simplicity as the primary focus. By using Verlet integration, I avoided the complexity of traditional numerical solvers while maintaining stability and accuracy for simulating basic motion. The system handles particle-based objects, where each particle's position is updated based on its previous position and applied forces, creating a natural and intuitive simulation of movement. This straightforward approach allowed me to focus on core principles such as motion simulation and collision resolution, providing a solid foundation for understanding physics integration without overwhelming intricacy.
The core of the engine is represented by the solver class (derived from AActor so it can be easily placed in the scene) and the particle data.
FParticlesData is a simple struct holding data for each particle. I've decided to represent each particle with the simplest data possible to avoid the overhead that would come from using the UObject class hierarchy; most importantly, I've avoided expensive garbage collection. Storing everything in contiguous arrays also helps reduce cache misses, since more of the particle data fits in the CPU cache at once.
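The actual layout isn't reproduced here, but a structure-of-arrays sketch along these lines captures the idea; the field names are my illustration, not necessarily the real ones:

```cpp
#include "CoreMinimal.h"

// Illustrative structure-of-arrays particle container (field names assumed).
// Plain struct, no UObject, so nothing here is tracked by the garbage collector.
struct FParticlesData
{
    TArray<FVector> Positions;          // current positions
    TArray<FVector> PreviousPositions;  // positions from the previous step (needed by Verlet)
    TArray<FVector> Accelerations;      // accumulated acceleration for the current step
    TArray<float>   Radii;              // per-particle collision radius

    int32 Num() const { return Positions.Num(); }

    int32 AddParticle(const FVector& InPosition, float InRadius)
    {
        Positions.Add(InPosition);
        PreviousPositions.Add(InPosition); // equal to the current position, so the particle starts at rest
        Accelerations.Add(FVector::ZeroVector);
        Radii.Add(InRadius);
        return Positions.Num() - 1;
    }
};
```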
SolverActor is a more complex object that operates on the particle data. It also owns and controls the other key systems, such as the collision solver and the renderer, and is responsible for initializing all core components on BeginPlay.
Every frame, the solver actor advances the simulation. The number of substeps can be increased to get a more stable simulation at additional CPU cost.
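As a rough sketch of how such an actor fits together (member and function names other than ASolverActor and UpdateSolver are my assumptions), the per-frame substep loop might look like this:

```cpp
#include "CoreMinimal.h"
#include "GameFramework/Actor.h"
// UHT boilerplate such as the .generated.h include is omitted for brevity.

UCLASS()
class ASolverActor : public AActor
{
    GENERATED_BODY()

public:
    virtual void BeginPlay() override
    {
        Super::BeginPlay();
        // Create the core components owned by this actor.
        CollisionSolver  = NewObject<UNaiveSolver>(this);      // or another solver class
        ParticleRenderer = NewObject<UParticleRenderer>(this); // renderer class name assumed
    }

    virtual void Tick(float DeltaTime) override
    {
        Super::Tick(DeltaTime);

        // More substeps means smaller integration steps: more stable, more CPU.
        const float SubDeltaTime = DeltaTime / NumSubsteps;
        for (int32 Step = 0; Step < NumSubsteps; ++Step)
        {
            UpdateSolver(SubDeltaTime);
        }

        // Hand the final positions of this frame to the renderer.
        ParticleRenderer->UpdatePositions(Particles);
    }

private:
    void UpdateSolver(float DeltaTime); // Verlet integration + collision solving

    UPROPERTY(EditAnywhere)
    int32 NumSubsteps = 8;

    UPROPERTY()
    TObjectPtr<UBaseCollisionSolver> CollisionSolver;

    UPROPERTY()
    TObjectPtr<UParticleRenderer> ParticleRenderer;

    FParticlesData Particles;
};
```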
Physics integration is implemented in the UpdateSolver function. It applies the Verlet integration formula to update each particle's position from its current position, previous position, acceleration and the delta time: new position = current + (current - previous) + acceleration * dt². It also resolves each particle's collisions using one of the available collision solvers; internally they perform broad and narrow collision checks and apply displacement where appropriate.
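A minimal sketch of that update, reusing the names from the sketches above, could look like this; the velocity never appears explicitly because it is encoded in the difference between the current and previous positions:

```cpp
// Illustrative Verlet step; member and field names follow the earlier sketches.
void ASolverActor::UpdateSolver(float DeltaTime)
{
    const FVector Gravity(0.0f, 0.0f, -981.0f); // example constant acceleration

    for (int32 i = 0; i < Particles.Num(); ++i)
    {
        Particles.Accelerations[i] += Gravity;

        const FVector Current  = Particles.Positions[i];
        const FVector Previous = Particles.PreviousPositions[i];

        // Position Verlet: x_next = x + (x - x_prev) + a * dt^2
        const FVector Next = Current + (Current - Previous)
                           + Particles.Accelerations[i] * DeltaTime * DeltaTime;

        Particles.PreviousPositions[i] = Current;
        Particles.Positions[i]         = Next;
        Particles.Accelerations[i]     = FVector::ZeroVector; // forces are re-accumulated each step
    }

    // Broad phase, narrow phase and displacement happen inside the chosen solver.
    CollisionSolver->SolveCollisions(Particles);
}
```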
The collision solver in my application is implemented as a lightweight class derived from UObject, making it more efficient than using a full AActor. This design choice ensures better performance while maintaining the flexibility of Unreal Engine's object hierarchy. The solver is owned and managed by an ASolverActor, which acts as the central controller for physics interactions in the scene. The solver's primary role is to perform both broad and narrow collision checks, optimizing the computational workload by first identifying potential collision pairs (broad phase) and then evaluating precise collision details for those pairs (narrow phase). This two-step approach ensures scalability and efficiency, even as the number of particles increases.
Once collisions are detected, the solver resolves them by applying displacement to each involved particle. The displacement is calculated to separate overlapping particles while preserving physical plausibility and stability. The resolution process takes into account each particle's mass, radius, position, direction and collision response, ensuring smooth and realistic motion. By coupling this logic with the Verlet integration method, the solver achieves stable collision handling without introducing unnecessary complexity. This streamlined system is well-suited for the simulation's focus on simple motion and interaction, providing a solid balance between accuracy and performance.
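The exact response isn't reproduced here, but a typical way to resolve an overlap between two circular particles looks roughly like the sketch below; the correction is split evenly for simplicity, whereas the actual solver also weights it by mass and applies its collision response.

```cpp
// Illustrative pairwise resolution: push two overlapping particles apart
// along the line between their centres.
void ResolveParticleOverlap(FParticlesData& Particles, int32 A, int32 B)
{
    const FVector Delta       = Particles.Positions[A] - Particles.Positions[B];
    const float   Distance    = static_cast<float>(Delta.Size());
    const float   MinDistance = Particles.Radii[A] + Particles.Radii[B];

    if (Distance >= MinDistance || Distance <= KINDA_SMALL_NUMBER)
    {
        return; // not overlapping (or exactly coincident, which we skip)
    }

    const FVector Direction = Delta / Distance;       // unit vector from B towards A
    const float   Overlap   = MinDistance - Distance; // interpenetration depth

    // Half of the correction to each particle; a mass-weighted split would
    // move the lighter particle further than the heavier one.
    Particles.Positions[A] += Direction * (0.5f * Overlap);
    Particles.Positions[B] -= Direction * (0.5f * Overlap);
}
```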
The base solver class provides functions for checking and resolving a collision between two particles. The specifics of the broad and narrow collision phases are left up to the concrete solver classes.
In this application there are three solvers to choose from: NaiveSolver, PointHashGridSolver and LightGridSolver.
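The solver names above are from the project; the interface below is my sketch of how the shared base class might be shaped, with the pairwise helpers in the base and the broad and narrow phases left to each subclass.

```cpp
// Sketch of the shared solver interface (method names assumed).
// UHT boilerplate omitted for brevity.
UCLASS(Abstract)
class UBaseCollisionSolver : public UObject
{
    GENERATED_BODY()

public:
    // Each concrete solver implements its own broad + narrow phase here.
    virtual void SolveCollisions(FParticlesData& Particles) {}

protected:
    // Shared helpers used once a candidate pair has been found.
    bool CheckCollision(const FParticlesData& Particles, int32 A, int32 B) const;
    void ResolveCollision(FParticlesData& Particles, int32 A, int32 B) const; // applies displacement as in the overlap sketch above
};

// UNaiveSolver, UPointHashGridSolver and ULightGridSolver derive from this
// base and each override SolveCollisions with their own spatial strategy.
```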
One of the collision solvers in my application is a naive solver, which checks every particle for possible collisions against every other particle. While simple to implement, this approach is inherently slow because it has a time complexity of O(n²), where n is the number of particles in the simulation. As the number of particles increases, the number of pairwise checks grows quadratically, leading to significant performance bottlenecks, especially for large systems. This inefficiency arises because the naive solver does not leverage spatial partitioning or other optimizations to reduce unnecessary checks between particles that are far apart and unlikely to collide. As a result, while this solver is useful for testing and small-scale simulations, it becomes impractical for more complex or large-scale applications.
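In code terms the naive approach is just a double loop over all pairs; a sketch using the helper names assumed above:

```cpp
// Illustrative O(n^2) pass: every particle is tested against every other one.
void UNaiveSolver::SolveCollisions(FParticlesData& Particles)
{
    const int32 Count = Particles.Num();
    for (int32 A = 0; A < Count; ++A)
    {
        for (int32 B = A + 1; B < Count; ++B) // each unordered pair exactly once
        {
            if (CheckCollision(Particles, A, B))
            {
                ResolveCollision(Particles, A, B);
            }
        }
    }
}
```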
Another collision solver in my application uses a Point Hash Grid for spatial partitioning, significantly improving efficiency compared to the naive approach. This method organizes particles into a grid structure based on their spatial positions, allowing collisions to be checked only between particles within the same or neighboring grid cells. This dramatically reduces the number of pairwise checks, as particles that are far apart are automatically excluded from the collision detection process. The time complexity of this method is approximately O(n + k), where n is the number of particles and k is the number of potential collisions within the grid. This makes it far more scalable than the naive O(n²) approach, especially in simulations with large numbers of particles.
My implementation is based on FPointHashGrid2D, a utility provided by Unreal Engine that offers a robust framework for implementing 2D spatial hash grids. Spatial hash grids are widely used in game development for efficient spatial queries, particularly in physics engines and pathfinding systems. In my implementation, particles are assigned to grid cells based on their positions, and only particles within a given cell and its immediate neighbors are considered for collision detection. This approach strikes an excellent balance between simplicity and performance, leveraging the strengths of Unreal's tools while tailoring the system to the specific needs of the simulation. The result is a collision solver that is both efficient and well-suited for real-time applications, enabling smooth and responsive interactions in scenarios with a high density of particles.
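The real broad phase sits on top of FPointHashGrid2D, so I won't reproduce its API here; the sketch below uses a plain TMap-based grid instead, purely to illustrate the idea of bucketing particles by cell and only testing neighbouring cells.

```cpp
// Conceptual spatial-hash broad phase (illustration only, not the FPointHashGrid2D API).
void UPointHashGridSolver::SolveCollisions(FParticlesData& Particles)
{
    const float CellSize = 10.0f; // assumed: roughly one particle diameter
    TMap<FIntPoint, TArray<int32>> Grid;

    // Broad phase: bucket every particle by its 2D cell coordinate.
    for (int32 i = 0; i < Particles.Num(); ++i)
    {
        const FIntPoint Cell(
            FMath::FloorToInt32(Particles.Positions[i].X / CellSize),
            FMath::FloorToInt32(Particles.Positions[i].Y / CellSize));
        Grid.FindOrAdd(Cell).Add(i);
    }

    // Narrow phase: only test against particles in the same or adjacent cells.
    for (int32 A = 0; A < Particles.Num(); ++A)
    {
        const FIntPoint Cell(
            FMath::FloorToInt32(Particles.Positions[A].X / CellSize),
            FMath::FloorToInt32(Particles.Positions[A].Y / CellSize));

        for (int32 OffsetX = -1; OffsetX <= 1; ++OffsetX)
        {
            for (int32 OffsetY = -1; OffsetY <= 1; ++OffsetY)
            {
                const TArray<int32>* CellParticles = Grid.Find(Cell + FIntPoint(OffsetX, OffsetY));
                if (CellParticles == nullptr)
                {
                    continue;
                }
                for (int32 B : *CellParticles)
                {
                    if (B > A && CheckCollision(Particles, A, B))
                    {
                        ResolveCollision(Particles, A, B);
                    }
                }
            }
        }
    }
}
```

With cells about one particle diameter across, any pair that could possibly overlap ends up in the same or an adjacent cell, which is what makes skipping all the other cells safe.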
The third solver uses a LightGrid as its spatial structure for efficient spatial queries.
The rendering system for my physics simulation is implemented as a simple UObject that owns a UNiagaraComponent. This design allows the renderer to efficiently prepare and send particle data to a NiagaraSystem, which is responsible for the actual rendering of the particles. The renderer collects the positional data of all particles from the physics solver, formats it as required, and feeds it into the Niagara simulation. By decoupling the rendering logic from the physics engine, this setup ensures a clean separation of concerns and maintains modularity in the system.
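One common way to hand the positions to Niagara from C++ is through the array data-interface helpers; the sketch below assumes the renderer keeps a UNiagaraComponent member and that the Niagara system exposes a Vector array user parameter, here called "ParticlePositions" (an assumed name).

```cpp
#include "NiagaraComponent.h"
#include "NiagaraDataInterfaceArrayFunctionLibrary.h"

// Sketch of the per-frame hand-off from the physics data to Niagara.
void UParticleRenderer::UpdatePositions(const FParticlesData& Particles)
{
    // Copy into a plain array matching the user parameter's type.
    const TArray<FVector> RenderPositions = Particles.Positions;

    UNiagaraDataInterfaceArrayFunctionLibrary::SetNiagaraArrayVector(
        NiagaraComponent,            // the UNiagaraComponent owned by this renderer
        TEXT("ParticlePositions"),   // user-exposed array parameter on the Niagara system
        RenderPositions);
}
```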
The NiagaraSystem uses a built-in Niagara sprite renderer to visualize the particles, creating a circle sprite for each particle in the simulation. The sprite renderer is highly efficient, leveraging instancing to draw thousands of sprites with minimal draw calls. This instancing capability is crucial for maintaining performance in real-time simulations where the particle count can be very high. Once the particle data is passed to the NiagaraSystem, the system updates the sprites' positions based on the provided data, ensuring that the visual representation aligns seamlessly with the underlying physics simulation.
Additionally, the Niagara emitter used in the system is a GPU emitter, which means the position updates and rendering computations are offloaded almost entirely to the GPU. This approach significantly reduces the CPU workload and takes advantage of the GPU's parallel processing power to handle the rendering of thousands of particles efficiently. By relying on the GPU for these tasks, the system achieves high performance and responsiveness, even in complex or high-density particle scenarios. This GPU-based rendering pipeline, combined with the efficiency of the Niagara sprite renderer, ensures that the visual component of the simulation remains smooth and performant.
The main bottleneck in the simulation is clearly the spatial queries and collision resolution. This could be optimized by using more efficient spatial data structures and by spreading the simulation across more threads; currently everything (simulation and collisions) runs on the game thread.
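As one possible direction rather than something the project does today, Unreal's ParallelFor could spread the integration step across worker threads; the collision pass is trickier because neighbouring particles write to shared positions, so it stays serial in this sketch.

```cpp
#include "Async/ParallelFor.h"

// Sketch: parallelising the integration step only. Each particle reads and
// writes just its own entries, so the loop bodies are independent.
void ASolverActor::IntegrateParallel(float DeltaTime)
{
    ParallelFor(Particles.Num(), [this, DeltaTime](int32 i)
    {
        const FVector Current  = Particles.Positions[i];
        const FVector Previous = Particles.PreviousPositions[i];

        const FVector Next = Current + (Current - Previous)
                           + Particles.Accelerations[i] * DeltaTime * DeltaTime;

        Particles.PreviousPositions[i] = Current;
        Particles.Positions[i]         = Next;
        Particles.Accelerations[i]     = FVector::ZeroVector;
    });

    // Collision solving still runs on the calling thread afterwards.
    CollisionSolver->SolveCollisions(Particles);
}
```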
Rendering seems to be very efficient thanks to Niagara's excellent sprite rendering capabilities and GPU instancing.